
Conversation

Collaborator

@IMbackK IMbackK commented Mar 5, 2025

This avoids conflicts with the CUDA/HIP runtime's internal memory management behavior.

This is a stopgap fix for #12152.

…replacing it.

This avoids conflicts with the CUDA/HIP runtime's internal memory management behavior.
@github-actions github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels Mar 5, 2025
@IMbackK IMbackK merged commit e721c05 into ggml-org:master Mar 6, 2025
47 checks passed
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025
…replacing it. (ggml-org#12209)

This avoids conflicts with the CUDA/HIP runtime's internal memory management behavior.
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Mar 19, 2025
…replacing it. (ggml-org#12209)

This avoids conflicts with the CUDA/HIP runtime's internal memory management behavior.
